Conversation
…act, subagent propagation

Register a full OpenClaw Context Engine alongside the memory slot. When activated via plugins.slots.contextEngine: "nowledge-mem":

- assemble() injects behavioral guidance + recalled memories via systemPromptAddition (cache-friendly system-prompt space)
- afterTurn() captures threads + triage/distill every turn (not just session end)
- compact() enhances compaction instructions with saved knowledge graph context so key decisions survive summarization
- prepareSubagentSpawn() propagates Working Memory + recalled memories to child sessions automatically

Hooks remain as a backward-compatible fallback when the CE is not active. Also fixes the recall hook: prependContext → appendSystemContext (cache-friendly).

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
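The commit above names four Context Engine entry points. A minimal sketch of that surface, assuming a factory that receives a client and guidance text; the hook names come from the commit, while the surrounding OpenClaw API and the `client` shape are hypothetical:

```javascript
// Hypothetical sketch of the Context Engine surface described in the commit.
// `client` (recall/capture/workingMemory) is an assumed interface, not a real API.
function createNowledgeMemContextEngineFactory({ client, guidance }) {
  return function create() {
    return {
      // Inject behavioral guidance + recalled memories into system-prompt space.
      assemble({ messages }) {
        const recalled = client.recall(messages);
        return { systemPromptAddition: [guidance, ...recalled].join("\n") };
      },
      // Capture the turn for triage/distill every turn, not only at session end.
      afterTurn({ messages }) {
        client.capture(messages);
      },
      // Enrich compaction instructions so key decisions survive summarization.
      compact({ instructions }) {
        return `${instructions}\nPreserve saved knowledge graph context.`;
      },
      // Propagate Working Memory + recalls to child (subagent) sessions.
      prepareSubagentSpawn() {
        return { systemPromptAddition: client.workingMemory().join("\n") };
      },
    };
  };
}
```

A runtime would call the factory once per session and invoke the hooks at the matching lifecycle points; the hook path stays available when no factory is registered.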
New users with no existing memories got zero guidance from the plugin — buildMemoryContextBlock returned empty when both WM and search were empty, so the AI never learned about Nowledge Mem tools. Now the 4-line BEHAVIORAL_GUIDANCE constant is always injected on the first message of every thread, regardless of recall results. Also removes generated_at timestamp from injected context (gratuitous per-turn variance, no purpose). Bump to v0.6.4. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
New skill detects the current agent, verifies nmem setup, and guides native plugin installation for richer features (auto-recall, auto-capture, graph tools). All existing skills now include a footer pointing agents to check-integration and the integrations docs page. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Documents which injection methods are cache-safe (appendSystemContext, systemPromptAddition) vs cache-breaking (prependContext) to prevent future regressions. Minor formatting fix in client.js. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…guide

- integrations.json: single source of truth for all 13 integrations (capabilities, transport, tool naming, thread save, install, detection)
- shared/behavioral-guidance.md: unified heuristics for WM, search, autonomous save, retrieval routing, thread save honesty
- docs/PLUGIN_DEVELOPMENT_GUIDE.md: rules for new plugin authors (transport, tool naming, skill alignment, capabilities checklist)

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…ance
- New status skill: nmem --json status for connection diagnostics
- distill-memory: add autonomous save encouragement ("do not wait to
be asked"), structured save fields (unit-type, labels, importance),
quality bar (skip routine, importance scale)
- search-memory: add contextual signals (debugging, architecture,
implicit recall language)
- check-integration: corrected install commands for all 8 agents,
references integrations.json as canonical source
- CHANGELOG: 0.6.0 entry
- README: add status skill to list
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
All plugins now share consistent behavioral heuristics aligned with community/shared/behavioral-guidance.md:

- distill-memory: "Save proactively... Do not wait to be asked" added to Claude Code, Droid, Cursor, Gemini CLI, Codex
- search-memory: contextual signals added to Droid, Cursor
- Bub _GUIDANCE_BASE: strengthened autonomous save language
- Codex AGENTS.md + distill.md: proactive save + add-vs-update
- Alma + OpenClaw: already had correct language (verified, no changes)

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Registry section links to integrations.json, shared guidance, and the plugin development guide as the single sources of truth
- Bub plugin added to the integration table (was missing)

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…tries

- integrations.json: npx-skills version 0.5.0 → 0.6.0 (matches CHANGELOG)
- integrations.json: add Antigravity + Windsurf trajectory extractors (were in README but missing from canonical registry)
- Claude Code distill-memory: remove redundant "Proactive Save" section (mandate moved to opening line; avoids duplication with "Suggestion")
- Cursor search-memory: remove duplicate "ambiguous result" line from Contextual Signals (already present in Strong Triggers)
- Bub plugin.py: update token budget comment ~50 → ~70 (matches actual guidance length after autonomous save strengthening)

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Registry:
- Claude Code version 0.7.1 → 0.7.2 (matches CHANGELOG)
- Droid/Cursor install commands: prose → actual runnable commands
- Antigravity/Windsurf extractors: transport http-api → cli, autoCapture true → false, threadSave plugin-capture → manual-export (they are offline extraction CLIs, not live-capture agents)

README:
- Add Claude Desktop and Browser Extension rows (were in registry but missing from table)
- Gemini CLI install: git clone → Extensions Gallery (current path)
- Cursor install: generic prose → Marketplace search

Bub:
- Add missing CHANGELOG 0.2.1 entry (pyproject.toml was bumped but CHANGELOG was not)

Gemini CLI (nested submodule):
- search-memory: rename "Strong Triggers" → "Strong Signals", add "Contextual signals" section to match shared behavioral guidance

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Tracks the 0.1.4 release: search signal alignment + proactive save. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. This behavior can be configured in the CodeRabbit settings.
📝 Walkthrough

Adds a canonical integrations registry and plugin development guide, standardizes proactive behavioral guidance across skills and plugins, implements an OpenClaw Context Engine path with refactored hooks/exports and API-first client thread methods, and updates multiple plugin versions, docs, and CLI/installation entries.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Runtime as OpenClaw Runtime
    participant CE as Nowledge‑Mem CE (createNowledgeMemContextEngineFactory)
    participant State as ceState
    participant Client as NowledgeMemClient
    participant Store as Memory Store / API

    Runtime->>CE: bootstrap()
    CE->>State: check/set active
    State-->>CE: active = true
    CE->>Client: fetch Working Memory & cached recalls
    Client->>Store: API WM/search requests
    Store-->>Client: WM/recalled results
    Client-->>CE: returned data cached

    Runtime->>CE: assemble(messages)
    CE->>CE: build systemPromptAddition (behavioral + WM + recalled)
    CE->>Client: optional memory search
    Client->>Store: search query
    Store-->>Client: results
    Client-->>CE: filtered recall
    CE-->>Runtime: append systemPromptAddition

    Runtime->>CE: afterTurn()
    CE->>CE: capture thread buffer & triage decision
    CE->>Client: triage/distill & save
    Client->>Store: POST /threads or /threads/{id}/append
    Store-->>Client: save confirmation
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~50 minutes

🚥 Pre-merge checks: ✅ 1 passed | ❌ 2 failed (warnings)
Actionable comments posted: 3
🧹 Nitpick comments (4)
nowledge-mem-claude-code-plugin/skills/distill-memory/SKILL.md (1)
8-10: Clarify proactive behavior by removing "suggest" framing.

Line 8 says "Do not wait to be asked," but Line 10 still frames this as "When to Suggest," which weakens the intended autonomous-save policy.

Proposed doc tweak:

```diff
-## When to Suggest (Moment Detection)
+## When to Save (Moment Detection)
```

Based on learnings, memory distillation guidance should proactively save durable decisions/learnings rather than waiting for explicit user prompts.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@nowledge-mem-claude-code-plugin/skills/distill-memory/SKILL.md` around lines 8-10: Update the wording to remove "suggest" framing and make the policy explicitly proactive: change the heading "When to Suggest (Moment Detection)" to "When to Save (Moment Detection)" and replace any phrasing like "Do not wait to be asked" with a clear directive such as "Save proactively when the conversation produces a decision, preference, plan, procedure, learning, or important context" so the SKILL.md text reflects autonomous-save behavior rather than suggestion.

nowledge-mem-npx-skills/CHANGELOG.md (1)
5-28: Consider clarifying version release timing.

Both v0.6.0 and v0.5.0 are dated 2026-03-23. If these are being released together as part of this PR, consider either:
- Combining them into a single release (v0.6.0)
- Adding a note explaining the batch release
Otherwise, the changelog entries are well-structured and accurately document the new features and alignment with shared/behavioral-guidance.md.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@nowledge-mem-npx-skills/CHANGELOG.md` around lines 5-28: The changelog lists both versions v0.6.0 and v0.5.0 with the same date (2026-03-23); clarify release timing by either combining the entries into a single 0.6.0 release or add a note that these were batched together on 2026-03-23; update CHANGELOG.md so the headings "## [0.6.0] - 2026-03-23" and "## [0.5.0] - 2026-03-23" reflect the chosen approach and include a brief explanatory sentence if you keep both entries (e.g., "Batch release: v0.5.0 and v0.6.0 published together on 2026-03-23").

nowledge-mem-npx-skills/skills/distill-memory/SKILL.md (1)
29-34: Structured save guidance is valuable.

The importance ranges (0.8–1.0 for major decisions, 0.5–0.7 for patterns, 0.3–0.4 for minor notes) provide actionable calibration for agents. The Native Plugin footer maintains consistency with other skills.

Note: The Cursor plugin's distill-memory/SKILL.md (per relevant snippet) uses an abbreviated format without importance ranges. Consider whether to harmonize all plugin skill docs to the same detail level for consistency, or keep platform-specific variations intentional.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@nowledge-mem-npx-skills/skills/distill-memory/SKILL.md` around lines 29-34: The docs add useful structured-save guidance (flags --unit-type, -l, -i with importance ranges 0.8–1.0, 0.5–0.7, 0.3–0.4) but the Cursor plugin's distill-memory/SKILL.md uses an abbreviated format; update that SKILL.md to either mirror the same structured-save section (explicitly document --unit-type, -l, -i and the importance ranges) or add a short note explaining why the Cursor plugin intentionally omits ranges, so all plugin skill docs are consistent; reference the file "distill-memory/SKILL.md", the flags "--unit-type", "-l", "-i", and the importance ranges when making the change.

nowledge-mem-openclaw-plugin/src/context-engine.js (1)
55-67: `extractText` is duplicated across three files.

This function exists identically in recall.js, capture.js, and here. Consider extracting it to a shared utility module (e.g., src/utils.js) to reduce duplication.
Verify each finding against the current code and only fix it if needed. In `@nowledge-mem-openclaw-plugin/src/context-engine.js` around lines 55 - 67, The function extractText is duplicated in extractText (used in context-engine.js) and also in recall.js and capture.js; factor it into a shared utility: create a new exported helper function extractText in a common module (e.g., utils.js) and replace the local implementations by importing that exported extractText where currently defined (refer to the extractText function signature and its callers in context-engine.js, recall.js, and capture.js), ensuring behavior stays identical and updating imports/exports accordingly.
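The duplicated helper handles message content that may arrive as a plain string or an array of content blocks. A minimal sketch of such a shared `extractText`, assuming `{ type, text }` block objects (the exact message shapes used in recall.js and capture.js are not shown in this review):

```javascript
// Sketch of a shared extractText utility, as the review proposes.
// Block shape ({ type, text }) is an assumption about the message format.
function extractText(content) {
  if (typeof content === "string") return content;
  if (Array.isArray(content)) {
    return content
      .map((block) => (typeof block === "string" ? block : block?.text ?? ""))
      .filter(Boolean) // drop empty or non-text blocks
      .join("\n");
  }
  return ""; // null/undefined or unknown shapes yield no text
}
```

Each current call site would then import this one definition instead of keeping its own copy.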
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@nowledge-mem-alma-plugin/CLAUDE.md`:
- Around line 128-155: The CLAUDE.md Cache Safety section references a
non-existent file postmortem/2026-03-23-system-prompt-cache-breaking-plugins.md;
fix by either (A) removing that reference from the "Cache Safety" section in
CLAUDE.md, or (B) adding the missing postmortem document with the expected
content and filename so the link resolves—search for the reference string
"postmortem/2026-03-23-system-prompt-cache-breaking-plugins.md" and update
CLAUDE.md (section "Cache Safety") or create the postmortem file accordingly.
In `@nowledge-mem-codex-prompts/AGENTS.md`:
- Line 54: Update the AGENTS.md guidance for --unit-type to include the fuller
set of recommended memory categories so agents don’t under-classify memories:
expand the example list around the existing "--unit-type" usage to explicitly
include learning, decision, fact, procedure, event, preference, plan, and
context (in addition to the existing
decision/procedure/learning/preference/event), and add a short note to favor
high-signal memories and use -l labels when they improve retrieval; keep wording
concise and replace the current partial list with the expanded set.
In `@nowledge-mem-openclaw-plugin/src/context-engine.js`:
- Around line 159-162: The module-level ceState.active boolean causes races
across multiple CE instances; change activation to reference-counted so each
createNowledgeMemContextEngineFactory increments a counter and dispose()
decrements it, only toggling the global "active" state when the count goes 0->1
or 1->0. Update ce-state.js to expose e.g.
incrementActive()/decrementActive()/isActive() (or a numeric activeCount) and
replace direct reads/writes of ceState.active in
createNowledgeMemContextEngineFactory, the CE dispose() implementation, and the
other affected areas (the block around the other create factory at the later
lines referenced) to use these increment/decrement helpers so child sessions
don’t flip the shared flag prematurely.
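The reference-counting fix above can be sketched as a small counter module, using the helper names the prompt suggests (incrementActive/decrementActive/isActive); only the 0→1 and 1→0 transitions toggle the shared activation state, so child sessions no longer flip the flag prematurely:

```javascript
// Sketch of reference-counted CE activation (names follow the review prompt).
let activeCount = 0;

function incrementActive() {
  activeCount += 1;
  return activeCount === 1; // true only on the 0 -> 1 transition
}

function decrementActive() {
  activeCount = Math.max(0, activeCount - 1);
  return activeCount === 0; // true only on the 1 -> 0 transition
}

function isActive() {
  return activeCount > 0;
}
```

The factory would call incrementActive() on creation and decrementActive() in dispose(), reacting only when the return value signals a transition.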
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 11adbc27-28d6-470e-8727-5de1ceec5aea
📒 Files selected for processing (40)
- README.md
- docs/PLUGIN_DEVELOPMENT_GUIDE.md
- integrations.json
- nowledge-mem-alma-plugin/CHANGELOG.md
- nowledge-mem-alma-plugin/CLAUDE.md
- nowledge-mem-alma-plugin/README.md
- nowledge-mem-alma-plugin/main.js
- nowledge-mem-alma-plugin/manifest.json
- nowledge-mem-bub-plugin/CHANGELOG.md
- nowledge-mem-bub-plugin/pyproject.toml
- nowledge-mem-bub-plugin/src/nowledge_mem_bub/plugin.py
- nowledge-mem-claude-code-plugin/skills/distill-memory/SKILL.md
- nowledge-mem-codex-prompts/AGENTS.md
- nowledge-mem-codex-prompts/distill.md
- nowledge-mem-cursor-plugin/skills/distill-memory/SKILL.md
- nowledge-mem-cursor-plugin/skills/search-memory/SKILL.md
- nowledge-mem-droid-plugin/skills/distill-memory/SKILL.md
- nowledge-mem-droid-plugin/skills/search-memory/SKILL.md
- nowledge-mem-gemini-cli
- nowledge-mem-npx-skills/CHANGELOG.md
- nowledge-mem-npx-skills/README.md
- nowledge-mem-npx-skills/skills/check-integration/SKILL.md
- nowledge-mem-npx-skills/skills/distill-memory/SKILL.md
- nowledge-mem-npx-skills/skills/read-working-memory/SKILL.md
- nowledge-mem-npx-skills/skills/save-handoff/SKILL.md
- nowledge-mem-npx-skills/skills/save-thread/SKILL.md
- nowledge-mem-npx-skills/skills/search-memory/SKILL.md
- nowledge-mem-npx-skills/skills/status/SKILL.md
- nowledge-mem-openclaw-plugin/CHANGELOG.md
- nowledge-mem-openclaw-plugin/CLAUDE.md
- nowledge-mem-openclaw-plugin/openclaw.plugin.json
- nowledge-mem-openclaw-plugin/package.json
- nowledge-mem-openclaw-plugin/src/ce-state.js
- nowledge-mem-openclaw-plugin/src/client.js
- nowledge-mem-openclaw-plugin/src/context-engine.js
- nowledge-mem-openclaw-plugin/src/hooks/behavioral.js
- nowledge-mem-openclaw-plugin/src/hooks/capture.js
- nowledge-mem-openclaw-plugin/src/hooks/recall.js
- nowledge-mem-openclaw-plugin/src/index.js
- shared/behavioral-guidance.md
The previous list omitted fact, plan, and context, which could steer agents away from valid memory classifications. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
The community repo is open source; references to postmortem files in the private parent repo are not resolvable for external contributors. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
nowledge-mem-alma-plugin/CLAUDE.md (1)
12-12: ⚠️ Potential issue | 🟡 Minor

Version inconsistency between status and changelog reference.

Line 12 states the current status is "as of v0.6.3", but line 152 references a change "Removed `generated_at` in 0.6.4". Either update the version in line 12 to v0.6.4, or clarify the forward-looking nature of the 0.6.4 reference.

📝 Suggested fix

Option 1: Update the current version if this documentation reflects v0.6.4

```diff
-## Current Status (as of v0.6.3)
+## Current Status (as of v0.6.4)
```

Option 2: Clarify the 0.6.4 reference is forward-looking

```diff
-- However, avoid embedding per-turn variance (timestamps, random IDs) in injected content. Removed `generated_at` in 0.6.4.
+- However, avoid embedding per-turn variance (timestamps, random IDs) in injected content. The `generated_at` field will be removed in v0.6.4.
```

Also applies to: 152-152
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@nowledge-mem-alma-plugin/CLAUDE.md` at line 12, Update the version reference inconsistency: locate the "Current Status (as of v0.6.3)" heading (line with "Current Status") and either change it to "as of v0.6.4" to match the changelog entry "Removed `generated_at` in 0.6.4" (the changelog line referencing 0.6.4) or modify the changelog entry to indicate it's upcoming (e.g., prefix with "planned" or "upcoming 0.6.4") so the document consistently reflects whether 0.6.4 is released or forward-looking.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 24cc233b-26eb-4d27-888d-7d9e62217854
📒 Files selected for processing (3)
- nowledge-mem-alma-plugin/CLAUDE.md
- nowledge-mem-codex-prompts/AGENTS.md
- nowledge-mem-openclaw-plugin/CLAUDE.md
✅ Files skipped from review due to trivial changes (2)
- nowledge-mem-codex-prompts/AGENTS.md
- nowledge-mem-openclaw-plugin/CLAUDE.md
Conversations now sync to Nowledge Mem during normal use — after 2 minutes of idle, on thread switch, and on quit. Previously the quit hook was the only capture mechanism, but users rarely quit Alma, so threads were effectively never saved. Also broadens write heuristics so casual conversations can produce memory saves (facts, preferences), not just architecture decisions. Adds community CLAUDE.md documenting integrations.json as the canonical plugin registry and its downstream consumers. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
chat.message.didReceive and thread.activated may not exist in all Alma versions — registerEvent silently returns false when unsupported. The live sync now piggybacks on willSend (the only confirmed hook), which fires before each user message. Thread switch is detected by comparing the current threadId with the last seen one. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Two bugs fixed:

1. Alma may only support one handler per event — registering willSend twice likely caused the capture handler to overwrite the recall handler (or vice versa). Now a single willSend does both.
2. context.chat.getActiveThread()/getMessages() may not exist — added try/catch with fallback to accumulating messages from willSend payloads directly.

Added diagnostic logging (logger.debug) so failures are visible in Alma logs instead of being silently swallowed.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Complete rewrite of capture logic — all message data from hook payloads (willSend/didReceive), never context.chat.getMessages(). Title resolution deferred to flush time via 4-strategy resolveTitle(). Hook registration uses context.events ?? context.hooks (canonical API first). LRU eviction at 20 thread buffers. Idle timer reduced to 7s. Bumps to v0.6.13. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
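The LRU eviction at 20 thread buffers described above can be sketched with a Map whose insertion order doubles as recency order (a minimal sketch; the real buffer contents and the best-effort flush before eviction are omitted):

```javascript
// Sketch of LRU-capped thread buffers; the cap of 20 matches the commit.
const MAX_BUFFERS = 20;
const buffers = new Map(); // Map preserves insertion order = recency order

function touchBuffer(threadId) {
  const buf = buffers.get(threadId) ?? { messages: [] };
  buffers.delete(threadId); // re-insert so this thread becomes most recent
  buffers.set(threadId, buf);
  while (buffers.size > MAX_BUFFERS) {
    const oldest = buffers.keys().next().value; // first key = least recent
    buffers.delete(oldest); // a real impl would best-effort flush first
  }
  return buf;
}
```

Every capture event calls touchBuffer(threadId), so active conversations stay resident while stale ones age out.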
…utive messages Heartbeat-triggered sessions (ctx.trigger === "heartbeat") now early-return in agent_end, before_reset, and after_compaction hooks. The CE path already filtered these; the hook path did not, causing repeated batch appends of repetitive HEARTBEAT_OK content that timed out on large payloads. Also collapses consecutive identical messages before sending to nmem CLI, reducing payload size for any session with repetitive output. Bumps to v0.7.1. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
… flush guard

Deep review fixes:
- Use appendThread after first create (no duplicate threads on incremental flush)
- Per-thread idle timers (multiple conversations tracked independently)
- didReceive uses extractText (handles array-of-blocks content)
- Concurrent flush guard (buf.flushing flag)
- Best-effort flush before LRU eviction
- Quit handler flushes ALL buffered threads, removes redundant saveActiveThread
- manifest.json autoCapture description updated
- integrations.json threadSave note updated
- README "2 minutes" → "a few seconds"

fix(openclaw): clarify cron vs heartbeat filtering in capture comment

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
The previous consecutive dedup was ineffective for alternating user/assistant heartbeat patterns (roles differ, so no consecutive match). Replace with a smart repetition detector: when >50% of messages in a session are duplicates (by role+content), collapse to unique messages only. For 20 repetitive heartbeat messages, this produces 2 messages instead of 20. Also discovered that OpenClaw's afterTurn() does not currently pass isHeartbeat to context engines — the CE guard was dead code. The content-based dedup now protects both CE and hook paths. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
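The repetition detector described above can be sketched as follows: count duplicates keyed by role + content, and collapse to first occurrences only when more than half the messages repeat (the `{ role, content }` message shape is an assumption):

```javascript
// Sketch of the >50% content-based dedup for repetitive heartbeat sessions.
function collapseRepetitive(messages) {
  const seen = new Map();
  for (const m of messages) {
    const key = `${m.role}\u0000${m.content}`; // role + content identity
    seen.set(key, (seen.get(key) ?? 0) + 1);
  }
  const duplicates = messages.length - seen.size;
  if (messages.length === 0 || duplicates / messages.length <= 0.5) {
    return messages; // mostly unique: leave the session untouched
  }
  const emitted = new Set();
  return messages.filter((m) => {
    const key = `${m.role}\u0000${m.content}`;
    if (emitted.has(key)) return false; // drop repeats
    emitted.add(key);
    return true;
  });
}
```

Because the key includes role and content rather than adjacency, alternating user/assistant heartbeat pairs collapse too, which the earlier consecutive-only dedup missed.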
Stop sending whole conversation batches through argv-sized CLI payloads and append only the unsynced tail instead. Keep CLI-backed memory tools and API-backed thread sync on the same resolved config so long sessions stay reliable in both local and remote mode. Made-with: Cursor
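The tail-only sync above can be sketched by tracking, per thread, how many messages have already been appended and sending only the slice beyond that index (the `client.appendThread` signature is an assumption for illustration):

```javascript
// Sketch of unsynced-tail syncing instead of re-sending whole batches.
// `client.appendThread(threadId, messages)` is a hypothetical API method.
function makeThreadSyncer(client) {
  const syncedCount = new Map(); // threadId -> messages already appended
  return function syncTail(threadId, messages) {
    const done = syncedCount.get(threadId) ?? 0;
    const tail = messages.slice(done); // only the unsynced tail
    if (tail.length > 0) {
      client.appendThread(threadId, tail);
      syncedCount.set(threadId, messages.length);
    }
    return tail.length; // how many messages were sent this flush
  };
}
```

Going through an HTTP body rather than CLI argv also avoids argv length limits for long sessions, which is the failure mode the commit describes.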
🧹 Nitpick comments (2)
nowledge-mem-alma-plugin/alma-skill-nowledge-mem.md (1)
1-67: Consider adding explicit Working Memory usage guidance.

The skill file thoroughly covers search, save, and thread operations, but Working Memory usage guidance is implicit (the tool is available but there is no heuristic on when to read it). A brief note like "Check Working Memory at session start for daily context" would align with the learning that the "skill file must teach agent when and how to... use Working Memory."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@nowledge-mem-alma-plugin/alma-skill-nowledge-mem.md` around lines 1-67: Add an explicit short Working Memory guidance paragraph to the Nowledge Mem skill: state when to check session Working Memory (e.g., at session start and before executing memory writes), what to read from it (daily context, recent user preferences, ephemeral flags), and how it interacts with plugin calls (prefer Working Memory for current-session ephemeral facts; fall back to nowledge_mem_query/nowledge_mem_search for durable recall); update the Response Contract or Query Heuristics section to include a one-line example like "Check Working Memory at session start for daily context" and mention "Working Memory" alongside nowledge_mem_search in the Source examples.

nowledge-mem-alma-plugin/main.js (1)

419-430: Consider adding thread save honesty disclosure.

The `BEHAVIORAL_GUIDANCE` constant correctly includes the proactive save nudge and `sourceThreadId` awareness per coding guidelines. However, the canonical shared/behavioral-guidance.md (lines 104-110) specifies a "Thread Save Honesty" section requiring disclosure about runtime-specific capabilities — specifically "never claim a real transcript import when unsupported."

Since Alma supports real thread capture via hooks, this may be intentionally omitted, but consider adding a brief note that thread saves reflect actual captured messages (not handoff summaries) for transparency alignment with the shared guidance.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@nowledge-mem-alma-plugin/main.js` around lines 419 - 430, Add a brief "Thread Save Honesty" disclosure to the BEHAVIORAL_GUIDANCE constant so it states that thread saves reflect actual captured messages (not generated handoff summaries) at runtime; update the BEHAVIORAL_GUIDANCE array (the constant named BEHAVIORAL_GUIDANCE) by inserting one short sentence clarifying that thread captures are real transcripts when sourceThreadId is present and not simulated, matching the shared guidance wording style and tone.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: e0460c5f-0669-4b6a-aaf3-879cd7a57320
📒 Files selected for processing (15)
- CLAUDE.md
- integrations.json
- nowledge-mem-alma-plugin/CHANGELOG.md
- nowledge-mem-alma-plugin/CLAUDE.md
- nowledge-mem-alma-plugin/README.md
- nowledge-mem-alma-plugin/alma-skill-nowledge-mem.md
- nowledge-mem-alma-plugin/main.js
- nowledge-mem-alma-plugin/manifest.json
- nowledge-mem-alma-plugin/package.json
- nowledge-mem-openclaw-plugin/CHANGELOG.md
- nowledge-mem-openclaw-plugin/README.md
- nowledge-mem-openclaw-plugin/openclaw.plugin.json
- nowledge-mem-openclaw-plugin/package.json
- nowledge-mem-openclaw-plugin/src/client.js
- nowledge-mem-openclaw-plugin/src/hooks/capture.js
✅ Files skipped from review due to trivial changes (6)
- nowledge-mem-openclaw-plugin/package.json
- nowledge-mem-openclaw-plugin/openclaw.plugin.json
- nowledge-mem-alma-plugin/package.json
- nowledge-mem-alma-plugin/CHANGELOG.md
- integrations.json
- nowledge-mem-openclaw-plugin/CHANGELOG.md
🚧 Files skipped from review as they are similar to previous changes (3)
- nowledge-mem-openclaw-plugin/src/client.js
- nowledge-mem-alma-plugin/manifest.json
- nowledge-mem-alma-plugin/CLAUDE.md
Summary by CodeRabbit
New Features
Improvements
Documentation